
    Attractor networks and memory replay of phase coded spike patterns

    We analyse the storage and retrieval capacity in a recurrent neural network of spiking integrate-and-fire neurons. In the model we distinguish between a learning mode, during which the synaptic connections change according to a Spike-Timing Dependent Plasticity (STDP) rule, and a recall mode, in which the connection strengths are no longer plastic. Our findings show the ability of the network to store and recall periodic phase-coded patterns after a small number of neurons has been stimulated. The self-sustained dynamics selectively gives rise to an oscillating spiking activity that matches one of the stored patterns, depending on the initialization of the network. Comment: arXiv admin note: text overlap with arXiv:1210.678
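    The learning mode described above can be sketched as a pairwise STDP rule with an asymmetric exponential window. The parameters (`a_plus`, `a_minus`, `tau_plus`, `tau_minus`) are illustrative assumptions, not the authors' exact values:

```python
import numpy as np

def stdp_kernel(dt, a_plus=1.0, a_minus=0.5, tau_plus=20.0, tau_minus=40.0):
    """Asymmetric STDP window: potentiation when the presynaptic spike
    precedes the postsynaptic one (dt = t_post - t_pre > 0, in ms),
    depression otherwise. Parameters are illustrative."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

def learn_weights(spike_times):
    """Accumulate pairwise STDP contributions into a weight matrix from
    one spike time per neuron (one cycle of a phase-coded pattern)."""
    n = len(spike_times)
    w = np.zeros((n, n))
    for i in range(n):          # postsynaptic index
        for j in range(n):      # presynaptic index
            if i != j:
                w[i, j] += stdp_kernel(spike_times[i] - spike_times[j])
    return w
```

    In recall mode the matrix returned by `learn_weights` would simply be frozen and used to drive the network dynamics.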

    Associative memory of phase-coded spatiotemporal patterns in leaky Integrate and Fire networks

    We study the collective dynamics of a leaky integrate-and-fire network in which precise relative phase relationships of spikes among neurons are stored as attractors of the dynamics and selectively replayed at different time scales. Using an STDP-based learning process, we store in the connectivity several phase-coded spike patterns, and we find that, depending on the excitability of the network, different working regimes are possible, with transient or persistent replay activity induced by a brief signal. We introduce an order parameter to evaluate the similarity between stored and recalled phase-coded patterns, and measure the storage capacity. Modulation of spiking thresholds during replay changes the frequency of the collective oscillation or the number of spikes per cycle, while preserving the phase relationships. This allows a coding scheme in which phase, rate and frequency are dissociable. Robustness with respect to noise and heterogeneity of neuron parameters is studied, showing that, since the dynamics is a retrieval process, the units preserve stable and precise phase relationships and a unique oscillation frequency, even in noisy conditions and with heterogeneous internal parameters.
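    A natural form for the similarity measure mentioned above is the magnitude of the population-averaged phase difference between stored and recalled patterns. This is a hypothetical sketch of such an order parameter; the paper's exact definition may differ:

```python
import numpy as np

def phase_overlap(stored, recalled):
    """Order parameter in [0, 1]: magnitude of the population average of
    exp(i * phase difference). Equals 1 when the recalled pattern matches
    the stored phases up to a global shift, and is near 0 for unrelated
    phases. Inputs are arrays of phases in radians, one per neuron."""
    stored = np.asarray(stored)
    recalled = np.asarray(recalled)
    return np.abs(np.mean(np.exp(1j * (recalled - stored))))
```

    Invariance under a global phase shift matters here: replay at a different oscillation frequency shifts all phases together, so the overlap still registers a match.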

    Neural Avalanches at the Critical Point between Replay and Non-Replay of Spatiotemporal Patterns

    We model spontaneous cortical activity with a network of coupled spiking units, in which multiple spatio-temporal patterns are stored as dynamical attractors. We introduce an order parameter, which measures the overlap (similarity) between the activity of the network and the stored patterns. We find that, depending on the excitability of the network, different working regimes are possible. For high excitability, the dynamical attractors are stable, and a collective activity that replays one of the stored patterns emerges spontaneously, while for low excitability, no replay is induced. Between these two regimes, there is a critical region in which the dynamical attractors are unstable, and intermittent short replays are induced by noise. At the critical spiking threshold, the order parameter goes from zero to one, and its fluctuations are maximized, as expected for a phase transition (and as observed in recent experimental results in the brain). Notably, in this critical region, the avalanche size and duration distributions follow power laws. Critical exponents are consistent with a scaling relationship observed recently in neural avalanche measurements. In conclusion, our simple model suggests that avalanche power laws in cortical spontaneous activity may be the effect of a network at the critical point between the replay and non-replay of spatio-temporal patterns.
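    The avalanche sizes and durations whose distributions are analysed above are conventionally extracted from time-binned population activity, where an avalanche is a maximal run of non-empty bins. A minimal sketch of that standard operational definition:

```python
def avalanches(binned_activity):
    """Extract avalanche sizes and durations from time-binned spike counts.
    An avalanche is a maximal run of non-empty bins, bounded by empty bins;
    its size is the total spike count and its duration the number of bins."""
    sizes, durations = [], []
    size = dur = 0
    for count in binned_activity:
        if count > 0:
            size += count
            dur += 1
        elif dur > 0:           # an empty bin closes the current avalanche
            sizes.append(size)
            durations.append(dur)
            size = dur = 0
    if dur > 0:                 # close a trailing avalanche, if any
        sizes.append(size)
        durations.append(dur)
    return sizes, durations
```

    The resulting size and duration lists are what one would histogram to check for the power-law distributions reported in the abstract.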

    Storage of phase-coded patterns via STDP in fully-connected and sparse network: a study of the network capacity

    We study the storage and retrieval of phase-coded patterns as stable dynamical attractors in recurrent neural networks, for both an analog and an integrate-and-fire spiking model. The synaptic strength is determined by a learning rule based on spike-time-dependent plasticity, with an asymmetric time window depending on the relative timing between pre- and post-synaptic activity. We store multiple patterns and study the network capacity. For the analog model, we find that the network capacity scales linearly with the network size, and that both the capacity and the oscillation frequency of the retrieval state depend on the asymmetry of the learning time window. In addition to fully-connected networks, we study sparse networks, where each neuron is connected only to a small number z << N of other neurons. Connections can be short range, between neighboring neurons placed on a regular lattice, or long range, between randomly chosen pairs of neurons. We find that a small fraction of long range connections is able to amplify the capacity of the network. This implies that a small-world-network topology is optimal, as a compromise between the cost of long range connections and the capacity increase. The spiking integrate-and-fire model also shows the crucial result of storage and retrieval of multiple phase-coded patterns. The capacity of the fully-connected spiking network is investigated, together with the relation between the oscillation frequency of the retrieval state and the window asymmetry.
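    The short-range/long-range mixture described above can be sketched as a ring lattice in which each of the z inputs per neuron is rewired with probability `p_long` to a random distant source (a Watts-Strogatz-style construction; the paper's exact wiring procedure is an assumption here):

```python
import numpy as np

def small_world_adjacency(n, z, p_long, rng=None):
    """Directed adjacency matrix: each neuron receives about z inputs,
    mostly from its nearest neighbours on a ring, with a fraction p_long
    rewired to uniformly random (long-range) presynaptic neurons.
    Rewiring may occasionally duplicate an edge, so the realised in-degree
    can fall slightly below z when p_long > 0."""
    if rng is None:
        rng = np.random.default_rng()
    adj = np.zeros((n, n), dtype=bool)
    half = z // 2
    offsets = [o for o in range(-half, half + 1) if o != 0][:z]
    for i in range(n):
        for o in offsets:
            j = (i + o) % n              # short-range neighbour on the ring
            if rng.random() < p_long:    # rewire to a random long-range source
                j = int(rng.integers(n))
                while j == i:            # avoid self-connections
                    j = int(rng.integers(n))
            adj[i, j] = True
    return adj
```

    Sweeping `p_long` from 0 to 1 interpolates between the regular lattice and the random long-range graph compared in the abstract, with the small-world regime at small nonzero `p_long`.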

    On-line learning of unrealizable tasks

    The dynamics of on-line learning is investigated for structurally unrealizable tasks in the context of two-layer neural networks with an arbitrary number of hidden neurons. Within a statistical mechanics framework, a closed set of differential equations describing the learning dynamics can be derived for the general case of unrealizable isotropic tasks. In the asymptotic regime one can solve the dynamics analytically in the limit of a large number of hidden neurons, providing an analytical expression for the residual generalization error, the optimal and critical asymptotic training parameters, and the corresponding prefactor of the generalization error decay.

    Recurrence of spatio-temporal patterns of spikes and neural avalanches at the critical point of a non-equilibrium phase transition

    Recently, many experimental results have supported the idea that the brain operates near a critical point [1-5], as reflected by the power laws of avalanche size distributions and the maximization of fluctuations. Several models have been proposed as explanations for the power law distributions that emerge in spontaneous cortical activity [6,5]. Models based on branching processes and on self-organized criticality are the most relevant. However, there are additional features of neuronal avalanches that are not captured in these models, such as the stable recurrence of particular spatiotemporal patterns and the conditions under which these precise and diverse patterns can be retrieved [4]. Indeed, neuronal avalanches are highly repeatable and can be clustered into statistically significant families of activity patterns that satisfy several requirements of a memory substrate. In many brain areas with different functionality, repeatable precise spatiotemporal patterns of spikes seem to play a crucial role in the coding and storage of information. Many in vitro and in vivo studies have demonstrated that cortical spontaneous activity occurs in precise spatiotemporal patterns, which often reflect the activity produced by external or sensory inputs. The temporally structured replay of spatiotemporal patterns has been observed to occur, both in the cortex and hippocampus, during sleep and in the awake state, and it has been hypothesized that this replay may subserve memory consolidation. Previous studies have separately addressed the topics of phase-coded memory storage and neuronal avalanches, but our work is the first to show how these ideas converge in a single cortical model. We study a network of leaky integrate-and-fire (LIF) neurons, whose synaptic connections are designed with a rule based on spike-timing dependent plasticity (STDP).
The network works as an associative memory of phase-coded spatiotemporal patterns, whose storage capacity has been studied in [7]. In this paper, we study the spontaneous dynamics when the excitability of the model is tuned to be at the critical point of a phase transition, between the successful persistent replay and non-replay of encoded patterns. We introduce an order parameter, which measures the overlap (similarity) between the activity of the network and the stored patterns. We find that, depending on the excitability of the network, different working regimes are possible. For high excitability, the dynamical attractors are stable, and a collective activity that replays one of the stored patterns emerges spontaneously, while for low excitability, no replay is induced. Between these two regimes, there is a critical region in which the dynamical attractors are unstable, and intermittent short replays are induced by noise. At the critical spiking threshold, the order parameter goes from zero to one, and its fluctuations are maximized, as expected for a phase transition (and as observed in recent experimental results in the brain [5]). Notably, in this critical region, the avalanche size and duration distributions follow power laws.
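    The LIF network studied above can be sketched in a few lines: membrane potentials leak toward rest, spikes fire at a threshold, and each spike delivers instantaneous synaptic input. All parameters and the instantaneous-coupling assumption are illustrative; the paper's model details may differ:

```python
import numpy as np

def simulate_lif(w, i_ext, theta=1.0, tau=10.0, dt=0.1, steps=1000):
    """Minimal leaky integrate-and-fire network. Potentials decay with
    time constant tau under constant drive i_ext; a unit crossing the
    threshold theta spikes, delivers its column of w to the others, and
    is reset to 0. Returns a list of (step, indices_of_spiking_units)."""
    n = w.shape[0]
    v = np.zeros(n)
    spikes = []
    for t in range(steps):
        v += dt * (-v / tau + i_ext)      # leaky integration
        fired = v >= theta
        if fired.any():
            spikes.append((t, np.flatnonzero(fired)))
            v += w @ fired.astype(float)  # instantaneous synaptic input
            v[fired] = 0.0                # reset units that spiked
    return spikes
```

    Tuning `theta` in this sketch plays the role of the excitability parameter in the abstract: lowering it moves the network from the non-replay regime toward persistent collective activity.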

    Natural gradient matrix momentum

    Natural gradient learning is an efficient and principled method for improving on-line learning. In practical applications, however, there is an increased cost in estimating and inverting the Fisher information matrix. We propose to use the matrix momentum algorithm in order to carry out efficient inversion, and study the efficacy of a single-step estimation of the Fisher information matrix. We analyse the proposed algorithm in a two-layer network, using a statistical mechanics framework which allows us to describe the learning dynamics analytically, and compare performance with true natural gradient learning and standard gradient descent.
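    For reference, the true natural gradient update that the matrix-momentum algorithm approximates preconditions the gradient with the inverse Fisher matrix. A minimal sketch (the damping term is an added assumption for numerical stability, not part of the paper's formulation):

```python
import numpy as np

def natural_gradient_step(theta, grad, fisher, lr=0.1, damping=1e-3):
    """One natural-gradient update: theta <- theta - lr * F^{-1} g,
    solving the linear system rather than forming the inverse explicitly.
    Damping regularises near-singular Fisher estimates; the matrix-momentum
    algorithm in the paper avoids even this explicit solve."""
    f = fisher + damping * np.eye(len(theta))
    return theta - lr * np.linalg.solve(f, grad)
```

    The point of the comparison in the abstract is precisely the cost of this solve: it is O(d^3) per step in the parameter dimension d, which matrix momentum replaces with cheap iterative updates.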

    Effects of Noise in a Cortical Neural Model

    Recently Segev et al. (Phys. Rev. E 64, 2001; Phys. Rev. Lett. 88, 2002) made long-term observations of spontaneous activity of in-vitro cortical networks, which differ from the predictions of current models in many features. In this paper we generalize the EI cortical model introduced in a previous paper (S. Scarpetta et al., Neural Comput. 14, 2002), including intrinsic white noise and analyzing the effects of noise on the spontaneous activity of the nonlinear system, in order to account for the experimental results of Segev et al. Analytically we can distinguish different regimes of activity, depending on the model parameters. Using the analytical results as a guideline, we perform simulations of the nonlinear stochastic model in two different regimes, B and C. The Power Spectrum Density (PSD) of the activity and the Inter-Event-Interval (IEI) distributions are computed and compared with experimental results. In regime B the network shows stochastic resonance phenomena, and noise induces aperiodic collective synchronous oscillations that mimic the experimental observations at 0.5 mM Ca concentration. In regime C the model shows spontaneous synchronous periodic activity that mimics the activity observed at 1 mM Ca concentration, and the PSD shows two peaks at the 1st and 2nd harmonics, in agreement with experiments at 1 mM Ca. Moreover (due to intrinsic noise and nonlinear activation function effects) the PSD shows a broad-band peak at low frequency. This feature, observed experimentally, was not explained by previous models. In addition, we identify parametric changes (namely an increase of noise or a decrease of excitatory connections) that reproduce the fading of periodicity found experimentally at long times, and we identify a way to discriminate between these two possible effects by measuring the low-frequency PSD experimentally. Comment: 25 pages, 10 figures, to appear in Phys. Rev.
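    The PSD comparisons above rest on a standard periodogram estimate of an activity trace. A minimal sketch (in practice one would use Welch averaging to reduce the variance of the estimate):

```python
import numpy as np

def power_spectrum(signal, dt):
    """Periodogram estimate of the power spectral density of a real-valued
    activity trace sampled every dt seconds. The DC component is removed
    so a constant offset does not dominate the spectrum. Returns the
    frequency axis (Hz) and the PSD estimate."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    sig = signal - sig_mean if (sig_mean := np.mean(signal)) else signal
    spec = np.abs(np.fft.rfft(sig)) ** 2 * dt / n
    freqs = np.fft.rfftfreq(n, d=dt)
    return freqs, spec
```

    Applied to the model's activity, the harmonic peaks of regime C and the low-frequency broad-band peak discussed in the abstract would appear directly in the returned spectrum.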

    Role of topology in the spontaneous cortical activity

    Scarpetta et al. BMC Neuroscience 2015, 16(Suppl 1):P6 http://www.biomedcentral.com/1471-2202/16/S1/P